00989c20ff1386dc386d8124ebcba1a5-AuthorFeedback.pdf

Neural Information Processing Systems

We thank all the reviewers for their helpful feedback and positive view of our work. We believe that these additions address all of the main reviewer concerns. The actions are "turn left" and "turn right". Hausman et al. is discussed at line 95. The TE only uses the imitation learning loss. The Duan et al. architecture fails in … In our results we use behavioral cloning, and we plan to try IRL methods such as GAIL in future work. The agent must navigate through multiple waypoints.


FracAug: Fractional Augmentation Boosts Graph-level Anomaly Detection under Limited Supervision

Dong, Xiangyu, Zhang, Xingyi, Wang, Sibo

arXiv.org Artificial Intelligence

Graph-level anomaly detection (GAD) is critical in diverse domains such as drug discovery, yet high labeling costs and dataset imbalance hamper the performance of Graph Neural Networks (GNNs). To address these issues, we propose FracAug, an innovative plug-in augmentation framework that enhances GNNs by generating semantically consistent graph variants and pseudo-labeling with mutual verification. Unlike previous heuristic methods, FracAug learns the semantics within given graphs and synthesizes fractional variants, guided by a novel weighted distance-aware margin loss. This loss captures multi-scale topology to generate diverse, semantics-preserving graphs unaffected by data imbalance. FracAug then uses predictions on both the original and augmented graphs to pseudo-label unlabeled data, iteratively expanding the training set. As a model-agnostic module compatible with various GNNs, FracAug demonstrates remarkable universality and efficacy: experiments across 14 GNNs on 12 real-world datasets show consistent gains, boosting average AUROC, AUPRC, and F1-score by up to 5.72%, 7.23%, and 4.18%, respectively.
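The abstract's mutual-verification step — pseudo-labeling an unlabeled graph only when the model's predictions on the original graph and its augmented variant agree — can be illustrated with a minimal sketch. This is an assumption-laden illustration, not the paper's implementation: the function name `mutual_verify_pseudo_labels`, the single confidence `threshold`, and the use of scalar anomaly probabilities are all hypothetical simplifications.

```python
def mutual_verify_pseudo_labels(p_orig, p_aug, threshold=0.9):
    """Pseudo-label unlabeled graphs by mutual verification.

    p_orig, p_aug: anomaly probabilities in [0, 1] predicted on the
    original graphs and on their augmented variants, respectively.
    A graph is accepted only when both predictions are confident and
    agree: both >= threshold (label 1, anomalous) or both
    <= 1 - threshold (label 0, normal).
    Returns a list of (index, pseudo_label) pairs; disagreeing or
    low-confidence graphs are left unlabeled.
    """
    accepted = []
    for i, (po, pa) in enumerate(zip(p_orig, p_aug)):
        if po >= threshold and pa >= threshold:
            accepted.append((i, 1))
        elif po <= 1 - threshold and pa <= 1 - threshold:
            accepted.append((i, 0))
    return accepted


# Example: graph 1 is dropped because the two predictions disagree.
labels = mutual_verify_pseudo_labels([0.95, 0.50, 0.02],
                                     [0.92, 0.91, 0.05])
# → [(0, 1), (2, 0)]
```

In the iterative scheme the abstract describes, the accepted pairs would be merged into the labeled training set before the next round of training.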